-
CoDiCast: Conditional Diffusion Model for Global Weather Forecasting with Uncertainty Quantification

Accurate weather forecasting is critical for science and society. However, existing methods have not simultaneously achieved high accuracy, low uncertainty, and high computational efficiency. On one hand, traditional numerical weather prediction (NWP) models are computationally intensive because of their complexity. On the other hand, most machine-learning-based weather prediction (MLWP) approaches offer efficiency and accuracy but remain deterministic, lacking the ability to capture forecast uncertainty. To tackle these challenges, we propose a conditional diffusion model, CoDiCast, that generates global weather predictions with both accuracy and uncertainty quantification at modest computational cost. The key idea behind the prediction task is to generate realistic weather scenarios at a future time point, conditioned on observations from the recent past. Because diffusion models are probabilistic by nature, they are well suited to capturing the uncertainty of weather predictions. We therefore quantify uncertainty by repeatedly sampling stochastic Gaussian noise for each initial weather state and running the denoising process multiple times. Experimental results demonstrate that CoDiCast outperforms several existing MLWP methods in accuracy and is faster than NWP models at inference. Our model can generate 6-day global weather forecasts, at 6-hour steps and 5.625-degree latitude-longitude resolution, for more than 5 variables, in about 12 minutes on a commodity A100 GPU with 80 GB of memory. The source code is available at https://github.com/JimengShi/CoDiCast.

Free, publicly accessible full text available September 1, 2026.
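As a rough illustration of the ensemble-based uncertainty quantification described in the abstract (not the authors' implementation, which is at the linked repository), the sketch below draws several forecast members by restarting the reverse denoising process from fresh Gaussian noise, each time conditioned on the same recent observations. The tiny denoiser network, grid size, and diffusion schedule are placeholder assumptions.

```python
# Minimal sketch (assumed, not the CoDiCast code) of ensemble forecasting with a
# conditional diffusion model: every member starts from fresh Gaussian noise and
# is denoised conditioned on the two most recent weather states.
import torch
import torch.nn as nn

C, H, W = 5, 32, 64          # variables, lat, lon (illustrative grid)
T = 100                      # number of diffusion steps (assumed)

class TinyDenoiser(nn.Module):
    """Predicts the noise added to x_t, conditioned on past states."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 * C + 1, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, C, 3, padding=1),
        )
    def forward(self, x_t, cond, t):
        # broadcast the normalized diffusion step as an extra channel
        t_map = torch.full_like(x_t[:, :1], t / T)
        return self.net(torch.cat([x_t, cond, t_map], dim=1))

betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

@torch.no_grad()
def sample_forecast(model, cond):
    """One ensemble member: standard DDPM-style reverse diffusion from pure noise."""
    x = torch.randn(1, C, H, W)                      # stochastic starting point
    for t in reversed(range(T)):
        eps = model(x, cond, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

model = TinyDenoiser()
past = torch.randn(1, 2 * C, H, W)                   # two recent observed states
members = torch.cat([sample_forecast(model, past) for _ in range(8)])
print("ensemble mean:", members.mean(0).shape, "spread:", members.std(0).shape)
```

The ensemble mean gives the point forecast, while the per-grid-point spread across members serves as the uncertainty estimate.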
-
In coastal river systems, floods, which often occur during major storms or king tides, severely threaten lives and property. These river systems contain hydraulic structures such as dams, gates, pumps, and reservoirs, and floods can be mitigated or even prevented by strategically releasing water before extreme weather events. A standard approach used by local water management agencies is the "rule-based" method, which specifies predetermined water pre-releases based on historical human experience but tends to result in excessive or inadequate release. Iterative optimization methods that rely on detailed physics-based models for prediction are an alternative approach. However, such methods tend to be computationally intensive, requiring hours or even days to reach an optimal solution. In this paper, we propose a Forecast Informed Deep Learning Architecture, FIDLAR, to achieve rapid and near-optimal flood management with precise water pre-releases. FIDLAR seamlessly integrates two neural network modules: the Flood Manager, which generates water pre-release schedules, and the Flood Evaluator, which evaluates the generated schedules. The Evaluator module is pre-trained separately, and its gradient-based feedback is used to train the Manager, ensuring near-optimal water pre-releases. We conducted experiments on a flood-prone coastal area in South Florida. Results show that FIDLAR is several orders of magnitude faster than currently used physics-based approaches while outperforming baseline methods with improved water pre-release schedules.

Free, publicly accessible full text available April 11, 2026.
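As a hedged sketch of the Manager/Evaluator coupling described in the abstract (not the FIDLAR implementation), a frozen, pre-trained Evaluator scores the flood risk of a candidate pre-release schedule, and its gradients train the Manager that proposes the schedule. The network sizes, the water-cost penalty, and all dimensions below are illustrative assumptions.

```python
# Minimal sketch (assumed) of gradient-based feedback from a frozen Evaluator
# into a trainable Manager that proposes water pre-release schedules.
import torch
import torch.nn as nn

FORECAST_DIM, SCHEDULE_DIM = 24, 12   # assumed sizes of forecast input / gate schedule

class Manager(nn.Module):              # proposes water pre-release schedules
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FORECAST_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, SCHEDULE_DIM), nn.Sigmoid())
    def forward(self, forecast):
        return self.net(forecast)      # fraction of maximum release per gate/time step

class Evaluator(nn.Module):            # surrogate for the physics-based flood model
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FORECAST_DIM + SCHEDULE_DIM, 64),
                                 nn.ReLU(), nn.Linear(64, 1))
    def forward(self, forecast, schedule):
        return self.net(torch.cat([forecast, schedule], dim=-1))  # predicted flood severity

manager, evaluator = Manager(), Evaluator()
# In practice the Evaluator would be pre-trained on historical data; here its
# (random) weights are simply frozen to illustrate the gradient flow.
for p in evaluator.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(manager.parameters(), lr=1e-3)
for step in range(100):
    forecast = torch.randn(32, FORECAST_DIM)             # batch of storm forecasts
    schedule = manager(forecast)
    flood_risk = evaluator(forecast, schedule).mean()     # differentiable feedback
    water_cost = schedule.mean()                          # penalize excessive release (assumed term)
    loss = flood_risk + 0.1 * water_cost
    opt.zero_grad()
    loss.backward()                                       # gradients pass through the frozen Evaluator
    opt.step()
```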
-
Explaining deep learning models that operate on time series data is crucial in applications that require interpretable and transparent insights from time series signals. In this work, we investigate this problem from an information-theoretic perspective and show that most existing measures of explainability may suffer from trivial solutions and distributional shift issues. To address these issues, we introduce a simple yet practical objective function for explainable time series learning. The objective builds on the information bottleneck (IB) principle and modifies the IB objective to avoid trivial solutions and distributional shift. We further present TimeX++, a novel explanation framework that leverages a parametric network to produce explanation-embedded instances that are both in-distribution and label-preserving. We evaluate TimeX++ on synthetic and real-world datasets against leading baselines, and validate its practical efficacy through case studies in a real-world environmental application. Quantitative and qualitative evaluations show that TimeX++ outperforms baselines across all datasets, demonstrating a substantial improvement in explanation quality for time series data.
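As a hedged illustration of an information-bottleneck style explanation objective in the spirit of the abstract (not the TimeX++ code or its exact loss), the sketch below trains a parametric explainer to produce a saliency mask whose explanation-embedded instance preserves a frozen classifier's prediction while staying sparse; replacing masked steps with the series mean is a stand-in for keeping instances in-distribution. All networks and weights are toy assumptions.

```python
# Minimal sketch (assumed) of an IB-style time-series explanation objective:
# label preservation via KL divergence plus a simple sparsity (compression) term.
import torch
import torch.nn as nn
import torch.nn.functional as F

T_LEN, N_CLASSES = 128, 3                     # assumed series length / class count

black_box = nn.Sequential(nn.Flatten(), nn.Linear(T_LEN, N_CLASSES))   # stand-in for a pretrained classifier
explainer = nn.Sequential(nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                          nn.Conv1d(16, 1, 5, padding=2), nn.Sigmoid())  # per-step mask in [0, 1]

for p in black_box.parameters():              # the model being explained stays fixed
    p.requires_grad_(False)

opt = torch.optim.Adam(explainer.parameters(), lr=1e-3)
sparsity_weight = 0.1

for step in range(200):
    x = torch.randn(32, 1, T_LEN)             # batch of (toy) time series
    mask = explainer(x)
    # keep masked-out steps near the series mean so the explanation-embedded
    # instance stays close to the data rather than collapsing to zero
    baseline = x.mean(dim=-1, keepdim=True)
    x_explained = mask * x + (1 - mask) * baseline
    with torch.no_grad():
        target = F.softmax(black_box(x), dim=-1)
    pred = F.log_softmax(black_box(x_explained), dim=-1)
    label_preserving = F.kl_div(pred, target, reduction="batchmean")
    compression = mask.mean()                  # crude surrogate for the IB compression term
    loss = label_preserving + sparsity_weight * compression
    opt.zero_grad()
    loss.backward()
    opt.step()
```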